Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to learn a task in-context is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: $\sim$70% of attention heads and $\sim$20% of feed forward networks can be removed with minimal decline in task performance. We find substantial overlap in the set of attention heads (un)important for in-context learning across tasks and numbers of in-context examples. We also address our hypothesis through a task-agnostic lens, finding that a small set of attention heads in OPT-66B score highly on their ability to perform primitive induction operations associated with in-context learning, namely, prefix matching and copying. These induction heads overlap with task-specific important heads, suggesting that induction heads are among the heads capable of more sophisticated behaviors associated with in-context learning. Overall, our study provides several insights that indicate large language models may be under-trained to perform in-context learning and opens up questions on how to pre-train language models to more effectively perform in-context learning.
End-to-end speech recognition models trained with a joint Connectionist Temporal Classification (CTC)-attention loss have gained popularity recently. In these models, a non-autoregressive CTC decoder is often used at inference time due to its speed and simplicity. However, such models are hard to personalize because of their conditional independence assumption, which prevents output tokens from previous time steps from influencing future predictions. To tackle this, we propose a novel two-way approach that first biases the encoder with attention over a predefined list of rare long-tail and out-of-vocabulary (OOV) words, and then uses dynamic boosting and a phone alignment network during decoding to further bias the subword predictions. We evaluate our approach on the open-source VoxPopuli dataset and an in-house medical dataset, showing a 60% improvement in F1 score on domain-specific rare words over a strong CTC baseline.
Although supervised deep learning has revolutionized speech and audio processing, it requires building specialized models for individual tasks and application scenarios. Likewise, it is difficult to apply to dialects and languages for which only limited labeled data is available. Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains. Such methods have shown success in natural language processing and computer vision, reaching new levels of performance while reducing the number of labels required for many downstream scenarios. Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods. Other approaches rely on multi-modal data for pre-training, mixing text or visual data streams with speech. Although self-supervised speech representation learning is still a nascent research area, it is closely related to acoustic word embeddings and learning with zero lexical resources, both of which have seen active research for many years. This review presents approaches for self-supervised speech representation learning and their connections to other research areas. Since many current methods focus solely on automatic speech recognition as the downstream task, we also review recent benchmarking efforts that extend applications beyond speech recognition.
Automatic speech recognition (ASR) systems have found numerous industrial applications across very diverse domains. Since domain-specific systems perform better on in-domain evaluation than their domain-agnostic counterparts, the need for memory- and compute-efficient domain adaptation is evident. In particular, adapting the parameter-heavy transformer-based language models used for rescoring ASR hypotheses is challenging. In this work, we introduce domain prompts, a method that trains a small number of domain token embedding parameters to prime a transformer-based LM towards a particular domain. With just a handful of extra parameters, we achieve a 7-14% improvement over the baseline of using an unadapted LM. Despite being parameter-efficient, these improvements are comparable to those of fully fine-tuned models with hundreds of millions of parameters. Through ablations over prompts, dataset sizes, initializations, and domains, we provide evidence for the benefits of using domain prompts in ASR systems.
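The following is a minimal prompt-tuning sketch of the idea: a handful of domain token embeddings are trained while the rescoring LM stays frozen. GPT-2 stands in for the transformer LM, and the prompt length, learning rate, and example sentence are assumptions rather than the paper's settings.

```python
# Sketch of training only a small domain prompt in front of a frozen LM.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False                     # the LM itself is never updated

n_prompt, hidden = 10, model.config.n_embd
domain_prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)
optimizer = torch.optim.Adam([domain_prompt], lr=1e-3)

def domain_lm_loss(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt").input_ids          # (1, T)
    tok_emb = model.transformer.wte(ids)                          # (1, T, H)
    inputs = torch.cat([domain_prompt.unsqueeze(0), tok_emb], dim=1)
    # Prompt positions are masked out of the LM loss with label -100.
    labels = torch.cat(
        [torch.full((1, n_prompt), -100, dtype=torch.long), ids], dim=1
    )
    return model(inputs_embeds=inputs, labels=labels).loss

# One training step on an in-domain sentence (e.g. an ASR hypothesis).
optimizer.zero_grad()
loss = domain_lm_loss("the patient was prescribed amoxicillin")
loss.backward()
optimizer.step()
print(float(loss))
```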
Many recent advances in speech separation primarily target synthetic mixtures of short audio utterances with a high degree of overlap. These datasets differ significantly from real conversational data, and hence models trained and evaluated on them do not generalize to real conversational scenarios. Another issue with using most of these models on long-form speech is the non-explicit ordering of the separated speech segments, which arises from the unsupervised clustering of time-frequency masks or from permutation invariant training (PIT) losses. This makes it difficult to accurately stitch together homogeneous speaker segments for downstream tasks such as automatic speech recognition (ASR). In this paper, we propose a speaker-conditioned separator trained on speaker embeddings extracted directly from the mixed signal. We train this model with a directed loss that regulates the order of the separated segments. With this model, we achieve a significant improvement in word error rate (WER) on real conversational data without requiring an additional re-stitching step.
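To make the ordering issue concrete, the toy sketch below contrasts a permutation invariant (PIT) loss, which leaves the output order arbitrary, with a directed loss that ties output slot i to the speaker whose embedding conditioned the separator; the function names and the L1 distance are illustrative choices, not the paper's exact formulation.

```python
# Toy comparison of PIT loss (order-free) vs. a directed, fixed-order loss.
from itertools import permutations
import torch
import torch.nn.functional as F

def pit_loss(est: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # est, ref: (num_speakers, samples); the best permutation is chosen per mixture.
    n = est.shape[0]
    losses = [
        sum(F.l1_loss(est[p[i]], ref[i]) for i in range(n))
        for p in permutations(range(n))
    ]
    return torch.stack(losses).min() / n

def directed_loss(est: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # Output slot i must match reference speaker i (the conditioning order).
    return F.l1_loss(est, ref)

est = torch.randn(2, 16000)   # two estimated sources, 1 s at 16 kHz
ref = torch.randn(2, 16000)
print(pit_loss(est, ref), directed_loss(est, ref))
```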
We introduce BERTphone, a Transformer encoder trained on large-scale speech that outputs phonetically aware contextual representation vectors usable for both speaker and language recognition. This is achieved by training on two objectives: the first, inspired by adapting BERT to the continuous domain, involves masking spans of input frames and reconstructing the whole sequence for acoustic representation learning; the second, inspired by the success of bottleneck features from ASR, is a sequence-level CTC loss applied to phoneme labels for phonetic representation learning. We pre-train two BERTphone models (one on Fisher and one on TED-LIUM) and use them as feature extractors feeding x-vector-style DNNs for the two tasks. We attain a state-of-the-art $C_{\text{avg}}$ of 6.16 on the challenging LRE07 3-second closed-set language recognition task. On the Fisher and VoxCeleb speaker recognition tasks, we see an 18% relative reduction in speaker EER when training on BERTphone vectors instead of MFCCs. Overall, BERTphone outperforms previous phonetic pre-training approaches on the same data. We release our code and models at https://github.com/awslabs/speech-representations.
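A compressed sketch of the two training objectives (our own module names, not the released BERTphone code): masked spans of acoustic frames are reconstructed, and a sequence-level CTC loss is applied over phoneme labels.

```python
# Sketch of a dual-objective encoder: masked-frame reconstruction + phoneme CTC.
import torch
import torch.nn as nn

class DualObjectiveEncoder(nn.Module):
    def __init__(self, feat_dim=80, d_model=256, n_phones=50):
        super().__init__()
        self.proj_in = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.recon_head = nn.Linear(d_model, feat_dim)      # frame reconstruction
        self.phone_head = nn.Linear(d_model, n_phones + 1)  # +1 for the CTC blank

    def forward(self, frames, mask):
        # frames: (B, T, feat_dim); mask: (B, T) bool, True where spans are masked
        x = frames.masked_fill(mask.unsqueeze(-1), 0.0)
        h = self.encoder(self.proj_in(x))
        return self.recon_head(h), self.phone_head(h).log_softmax(-1)

model = DualObjectiveEncoder()
frames = torch.randn(2, 100, 80)
mask = torch.rand(2, 100) < 0.15
recon, phone_logp = model(frames, mask)

recon_loss = nn.functional.l1_loss(recon[mask], frames[mask])
ctc = nn.CTCLoss(blank=50)
targets = torch.randint(0, 50, (2, 20))                    # dummy phoneme labels
loss = recon_loss + ctc(
    phone_logp.transpose(0, 1),                            # (T, B, C) for CTCLoss
    targets,
    input_lengths=torch.full((2,), 100),
    target_lengths=torch.full((2,), 20),
)
print(float(loss))
```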
Heating in private households is a major contributor to today's emissions. Heat pumps are a promising alternative for heat generation and a key technology for achieving the goals of the German energy transition and for reducing dependence on fossil fuels. Today, the majority of heat pumps in the field are controlled by a simple heating curve, a naive mapping of the current outdoor temperature to a control action. A more advanced control approach is model predictive control (MPC), which has been applied to heat pump control in several research works. However, MPC is heavily dependent on the building model, which has several disadvantages. Motivated by this and by recent breakthroughs in the field, this work applies deep reinforcement learning (DRL) to heat pump control in a simulated environment. Through a comparison with MPC, we show that DRL can be applied in a model-free manner and achieve MPC-like performance. This work extends prior applications of DRL to building heating operation by performing an in-depth analysis of the learned control strategies and by giving a detailed comparison of the two state-of-the-art control methods.
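As a toy illustration only (not taken from the paper), the snippet below shows the heating-curve baseline described above, a naive map from outdoor temperature to a flow-temperature setpoint, alongside the kind of reward signal a DRL agent could optimize instead; every constant is an assumed value.

```python
# Toy heating-curve baseline and an illustrative DRL reward for heat pump control.
def heating_curve(outdoor_temp_c: float) -> float:
    """Naive baseline: the colder it is outside, the higher the flow setpoint."""
    slope, offset = -0.8, 38.0               # assumed curve parameters
    setpoint = offset + slope * outdoor_temp_c
    return min(max(setpoint, 25.0), 55.0)    # clamp to plausible flow temperatures

def reward(indoor_temp_c: float, target_c: float, energy_kwh: float) -> float:
    """Reward a DRL agent could maximize: penalize discomfort and energy use."""
    comfort_penalty = abs(indoor_temp_c - target_c)
    return -(comfort_penalty + 0.5 * energy_kwh)   # 0.5 is an assumed weight

print(heating_curve(-5.0))      # 42.0 degC setpoint on a cold day
print(reward(20.5, 21.0, 1.2))
```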
Knowledge about outcomes is critical for complex event understanding but is hard to acquire. We show that by pre-identifying a participant in a complex event, crowd workers are able to (1) infer the collective impact of salient events that make up the situation, (2) annotate the volitional engagement of participants in causing the situation, and (3) ground the outcome of the situation in state changes of the participants. By creating a multi-step interface and a careful quality control strategy, we collect a high quality annotated dataset of 8K short newswire narratives and ROCStories with high inter-annotator agreement (0.74-0.96 weighted Fleiss Kappa). Our dataset, POQue (Participant Outcome Questions), enables the exploration and development of models that address multiple aspects of semantic understanding. Experimentally, we show that current language models lag behind human performance in subtle ways through our task formulations that target abstract and specific comprehension of a complex event, its outcome, and a participant's influence over the event culmination.
Word error rate (WER) is the primary metric used to assess the quality of automatic speech recognition (ASR) models. It has been shown that ASR models tend to have much higher WER for speakers with speech impairments than for typical English speakers. At such high error rates, it is hard to determine whether a model can be useful at all. This study investigates the use of BERTScore, an evaluation metric for text generation, to provide a more informative measure of ASR model quality and usefulness. Both BERTScore and WER were compared against prediction errors manually annotated by speech-language pathologists for error type and assessment. BERTScore was found to be more correlated with the human annotations of error type and assessment. BERTScore was particularly robust to orthographic variations that preserve meaning (contraction and normalization errors). Furthermore, as measured with an ordinal logistic regression and the Akaike information criterion (AIC), BERTScore fit the error assessments better than WER. Overall, our findings suggest that BERTScore can complement WER when assessing ASR model performance from a practical perspective, especially for accessibility applications, where models can be useful even at lower accuracies than for typical speech.
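A minimal sketch of the comparison using the jiwer and bert-score packages: a meaning-preserving contraction inflates WER but barely moves BERTScore. The example sentences are invented.

```python
# Scoring the same ASR hypothesis with WER and BERTScore.
from jiwer import wer
from bert_score import score

reference = ["do not forget to take your medication tonight"]
hypothesis = ["don't forget to take your medication tonight"]

print("WER:", wer(reference[0], hypothesis[0]))   # penalizes the contraction
P, R, F1 = score(hypothesis, reference, lang="en")
print("BERTScore F1:", F1.item())                 # stays close to 1 (meaning kept)
```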
Artificial intelligence (AI), machine learning, and deep learning (DL) methods are becoming increasingly important in the field of biomedical image analysis. However, to exploit the full potential of such methods, a representative number of experimentally acquired images containing a large number of manually annotated objects is needed as training data. Here, we introduce SYNTA (synthetic data) as a novel approach for generating synthetic, photo-realistic, and highly complex biomedical images as training data for DL systems. We show the versatility of the approach in the context of muscle fiber and connective tissue analysis in histological sections. We demonstrate that robust and expert-level segmentation can be performed on previously unseen real-world data without any manual annotations, using synthetic training data alone. As a fully parametric technique, our approach constitutes an interpretable and controllable alternative to generative adversarial networks (GANs) and has the potential to significantly accelerate quantitative image analysis in a variety of biomedical applications in microscopy and beyond.
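A toy sketch of the fully parametric idea (not the SYNTA implementation): randomly parameterized elliptical "fibers" are drawn onto a noisy canvas together with a pixel-perfect segmentation mask, so no manual annotation is needed; shapes, intensities, and noise levels are assumptions.

```python
# Toy parametric generation of a synthetic image and its instance mask.
import numpy as np
from skimage.draw import ellipse

rng = np.random.default_rng(0)
H = W = 256
image = rng.normal(0.1, 0.02, (H, W))        # noisy background
mask = np.zeros((H, W), dtype=np.uint8)

for label in range(1, 21):                   # 20 synthetic "fibers"
    r, c = rng.integers(20, H - 20, size=2)
    rr, cc = ellipse(r, c, rng.integers(5, 15), rng.integers(5, 15), shape=(H, W))
    image[rr, cc] = rng.uniform(0.5, 0.9)    # fiber intensity
    mask[rr, cc] = label                     # instance label for training

print(image.shape, mask.max())               # (256, 256) 20
```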